Alan Cooper: Geek of the Week


In his well-received book ‘The Inmates are Running the Asylum’, Alan Cooper argued that, despite all appearances to the contrary, the people ostensibly in charge of the technology industry are not the ones controlling its direction or future.


Instead, the programmers and engineers who are put in charge of these companies end up wasting vast amounts of money, squandering customer loyalty and eroding their market lead by producing products that no one really needs.

Cooper’s thesis is an entertaining look at how some of the most talented people in technology produce some of the worst software.

He should know.  Early in his career, together with Gordon Eubanks, Cooper developed a business programming language called CBASIC, an early rival to the BASIC developed by Bill Gates and Paul Allen.


Above, the Ruby team circa spring 1989. From left to right: Frank Raab, Mike Geary, Alan Cooper, Gary Kratkin and Mark Merker. In the photograph on the right, Alan Cooper receives an award from Bill Gates for his work on Visual Basic in 1994.

Alan Cooper’s many firsts include a visual programming language that Bill Gates said would have a profound effect on Microsoft’s entire product range.  This, of course, was Visual Basic.

Now the president of his eponymous company in San Francisco, Alan spends his time helping people and businesses overcome design challenges, and offers training courses on software design and development topics.


RM:
Alan, you began studying architecture at college so how did you learn to program? When did it all start?
AC:
I conceived the crazy idea that learning to program would allow me to get a part-time job so that I could work my way through architecture school. One semester into Introduction to Business Data Processing in 1973 (actually, about one week into it) I knew that computer programming was my calling.
RM:
And from the time microcomputers became available you spotted a market and designed business accounting software. Did you have a feeling that you were at the start of a new age and way of doing business?
AC:
The sense of being in a brave new world was overwhelming and intoxicating. In the mid nineteen-seventies, only a few hundred people in the world knew that everything, everything, was about to change, and I was one of them.

All my life I’ve been a voracious reader. As a youth, I read science fiction to the exclusion of most everything else. It is telling that in the mid-1970s I stopped reading science fiction. The work I was doing with microcomputers was far more futuristic than anything I was reading.

Of course, even though I knew the entire world was changing, I have still been continually surprised by the astounding changes that I see every day. Knowing that the world is about to get crazy doesn’t make it feel sane.

RM:
Comparing how you think about programming now with how you thought when you were starting out, what’s the biggest change in your thinking?
AC:
The agile revolution is truly revolutionary. I’m not talking about the agile-arcana that consultants sell to gullible project managers, but about the real value changes in the way good programmers think.

Good programmers, agile programmers, now understand that it is the responsibility of the practitioners to make successful products and services. The business community isn’t really in the driver’s seat anymore.

The most significant aspect of agile can be seen in the practice of pair programming. Back in the bad old days, when I learned to code, programming was a solitary, macho exercise in sheer mental strength. Programming was an internal monologue, and you simply had to work and think harder to solve tough problems.

Today, agile has shown us that programming is best done as an external dialog, where you articulate what you are doing to a peer in the very act of doing it. This changes the emotional dynamic so utterly that it feels to me as though programming has emerged from a dark age.

RM:
When in your thinking process do you get to the point of knowing when to write code?
AC:
You should start coding right now. The problem isn’t when to start coding, it’s knowing how to regard the code you have written.

Code is not an asset, but rather a liability. All code, to some extent, needs to be thrown out and rewritten from scratch. When you can wrap your head around that, you are ready to begin coding.

The value in programming isn’t so much the code you get, but the knowledge you gain as you do it. Early in the process, the knowledge is high and the code is crappy. Later in the process, the knowledge is muted and the code is more valuable. If you don’t discard crappy code, you are doomed. When you can manage that tradeoff properly, it’s time to start coding.

Design is something that helps you get to that balance point earlier, with less code written and discarded. It’s a whole lot easier to write and discard design specs than code.

RM:
At a point in your career it seems you had the thought that software was mostly written for academics, was very complex and not user friendly, which I think is how personas came about. It’s a very good example of lateral thinking, but did it surprise you that no one else had thought of doing this?
AC:
Like all overnight successes, it took a decade of work. I created the first real personas in a matter of days, but they were based on thinking and work I had been doing over the course of several years. Several things about personas surprised me but not the fact that they hadn’t occurred to others.

I was surprised, and pleased, by how remarkably effective they were at solving the two most intractable problems in interaction design: 1) knowing what to put in or leave out, and 2) communicating my design thinking.

I was surprised by how hard it was for some practitioners to wrap their heads around the concept. It seemed pretty obvious to me.

I was surprised by how rapidly and universally personas were adopted as an effective design tool by the then-exploding profession of interaction design.

I was surprised by how often and widely personas have been, and still are, misunderstood, misused, abused, and vilified by interaction designers.
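Personas are a design artifact rather than code, but the idea behind them can be made concrete for programmers. The sketch below, in Python, shows a minimal persona record and the kind of feature-inclusion check Cooper describes as personas’ first great strength: knowing what to put in or leave out. The persona, her goals, and the helper function are invented for illustration; they are not from Cooper’s method documents.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """A lightweight record of an archetypal user, as used in interaction design."""
    name: str
    role: str
    goals: list = field(default_factory=list)
    frustrations: list = field(default_factory=list)

def serves_persona(feature_goal: str, persona: Persona) -> bool:
    """A feature earns its place only if it advances one of the persona's goals."""
    return feature_goal in persona.goals

# A hypothetical primary persona for a small-business accounting package.
clara = Persona(
    name="Clara",
    role="Small-business bookkeeper",
    goals=["reconcile accounts quickly", "trust the monthly totals"],
    frustrations=["jargon-heavy screens", "features she never uses"],
)

print(serves_persona("reconcile accounts quickly", clara))  # True
print(serves_persona("export ledger to XML", clara))        # False
```

The point of the exercise is the second surprise Cooper mentions: a named, specific user makes design decisions communicable, because “would Clara use this?” is an arguable question in a way that “would the user use this?” is not.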

RM:
Taking this from a slightly different angle, lots of people have tried to come up with languages or programming systems that allow non-programmers to program. Is this a doomed enterprise, do you think, in that it’s not that we haven’t found the right syntax, but that people would have to learn what is an unnatural act?
AC:
Creating programming systems for non-programmers is one of the oldest memes in the digital world, and it remains as wrong-headed and unworkable today as it ever was.

I distinctly remember the day, way back in 1973, as I was on the threshold of a career in programming, standing in the computer center at the College of Marin, when a senior classman dourly showed me the front page of Computerworld, the then-current leading trade publication. In three-inch-high letters the headline said, “3G Languages Make Programmers Obsolete.” My heart sank as I imagined my chosen career being over before it ever started. Of course, it all turned out to be a mirage.

Countless multitudes of programmers over the years have tried to solve this problem with better, easier, more visual, more conceptual, more encapsulatable languages. None of them have worked, and none of them will work. It isn’t the fault of the language. The problem resides in the wiring of the human mind. Only a very few humans can think in the manner required to write a computer program. That number is a constant, probably on the order of one in 100,000 people, and it isn’t going to change.

Besides, it’s a silly idea to try to make everyone program. It’s as silly as saying everyone should be able to write advertising copy. There simply isn’t that big a demand for amateur ad copy. It’s far better to have fewer programmers who are really good, than to have lots of hackers fouling up cyberspace.

RM:
If I’ve got the history right it was around the mid-1980s that like many people you decided to make Windows your platform of choice. What was the killer app which persuaded you to do this?
AC:
Dynamic relocation and interprocess communications. There were other multi-tasking systems available. I even wrote one myself in 1982. The problem was that none of them provided these two essential tools necessary for creating useful business applications.

A good operating system needed to allow multiple programs to start and stop, acquire and relinquish memory, and communicate with each other in a practical and useful manner. Windows was the first publicly available operating system that did that.

A lot of breath was wasted over the algorithms used to dispatch tasks in multitasking operating systems. Windows was crucified for having a non-preemptive system, but that’s like saying, “This Ferrari is terrible because it’s painted yellow!” Certainly, a preemptive dispatching system is superior, and today all of them, including Windows, use that method.

What mattered was that a series of programs could load, run, and shut down, and then other programs could load and run and use the memory previously consumed by their predecessor. That’s what dynamic relocation is all about, and only Windows delivered it back then.

Just in case you were wondering, Windows’ user interface was terrible. Actually, all of its interfaces were terrible. The Windows API was one of the worst pieces of amateurish garbage I have ever witnessed, and it remains a landmark in bad API design. (This is not hyperbole. It was painfully bad, and only .NET was capable of rescuing it from the damned.)

RM:
I read somewhere that the big let-down was that the shell was terrible so you began to write one of your own which you called Tripod. Maybe you can explain the basic problem that you resolved to redesign?
AC:
Yes. The early Windows shell was an embarrassment to the profession.

Only after I began to work with Microsoft did I learn why this was. Before the release of Windows 3.0 (its fifth major release), it was not a strategic product within the organization, it was not fully funded, and it was coded up largely by college student interns.

The tools were weak, the applications pathetic, the API was a joke, and the shell, Oh My God, was that shell bad. Everyone knew it was terrible, but everyone also knew that the shell was the operating system vendor’s responsibility.

Nobody wanted to step onto the tracks in front of the Microsoft train with their own shell. Besides, there were a million applications that needed writing instead. But Microsoft never regarded the shell as important, and it just languished, and I finally got fed up enough with it that I stepped on the rails.

RM:
Can you talk me through how you showed Bill Gates Tripod and the project this then led to?
AC:
I called Tripod a “Shell construction set,” because you could use it to build the shell you wanted. I showed it to lots of software publishing companies, and although they were all impressed, they didn’t want to take on what was clearly Microsoft’s role: the supplier of the shell for Windows.

Finally, I decided to show it to Gates. Although we had met and spoken several times over the years, I didn’t feel that I knew him well enough to just call him up. An old friend of mine worked at Microsoft in the sales department.

He secured an introduction to a mid-level executive there named Gabe Newell, who is now the famously reclusive and successful leader of Valve, the game company behind Steam. I began to show the program to Gabe, and after just a couple of minutes he stopped me, declaring, “Bill has got to see this!” I had an audience with Gates just a few weeks later.

Gates was very impressed with the original Tripod prototype. During the demo, when he saw it for the first time, one of his lieutenants made some critical remarks about it. Before I could reply, Bill began to defend my work. I don’t want to name any names, but the critic was Tandy Trower.

It was clear to see that Gates was immediately sold on the program. At one point he turned to his retinue and asked, “Why can’t we do stuff like this?”

RM:
How did you go about designing the Visual Basic software and did you have a stream of requests for features or was it left up to your team to develop it?
AC:
As soon as we consummated the deal with Microsoft, we rechristened it with a new code name, exchanging Tripod for Ruby. I wrote a detailed specification document explaining what the program was, how it worked, and what it would do. We agreed on that spec and then I hired a small team of bright programmers to build it. Microsoft mostly left us alone during this time.

When we finally delivered the product, it went through Microsoft’s own QA process. We fixed all the bugs they found, and they formally accepted it.

The product was immediately caught in a political squeeze play within Microsoft. To this day I don’t know exactly what happened, as a curtain of silence descended. Finally, a couple of years later, they released Visual Basic. They had converted my shell program intended for users into a language intended for programmers.

Even though Visual Basic is the only product that Microsoft has ever shipped that was both a commercial and critical success on version 1.0, they never approached me again for anything. In about 1990, I brought another excellent idea to Gates, offering to build it for him. He dismissed the proposal, claiming that his people were already working on something similar (I have never seen actual evidence of this).

RM:
Do you think overall that software is getting better? Are we on a trajectory where we learn enough lessons from the past and come up with enough new ideas?
AC:
There are some good things in software. I particularly like the 3D graphics libraries that allow products like SketchUp to exist.

On the other hand, HTML set programming back about 20 years, and it still hasn’t recovered.

In those same years, our software technology has made a few gains and suffered a few losses and today we find ourselves in substantially the same boat we were in when I first learned how to program.

There are certainly more people writing software these days, but the ratio of good software to bad remains overwhelmingly in favor of the bad, and the number of fields that remain unaddressed by any software at all is still enormous.

Computer hardware is a product of the industrial age, and it benefits from industrial processes. Software is a post-industrial product, and industrial processes only hinder it. We, as a society, have not really come to grips with that reality. It is arguable whether or not we ever will.

RM:
Are there other skills that are not directly related to programming that you feel have improved your programming or that are valuable to have as a programmer? Are there characteristics of good programmers?
AC:
Cognitive psychology comes to mind. For the last 20 years I have been a designer of software and not a programmer of it. There’s lots of value to be had just in stopping programming.
RM:
What would you like your legacy to be? And in what ways do you think you have influenced the software industry as a technologist?
AC:
I hope to be remembered as someone who chose early in my life to devote everything to the design, development, and deployment of good quality software. I hope that I have inspired others to do the same.


About the author

Richard Morris


Richard Morris is a journalist, author and public relations/public affairs consultant. He has written for a number of UK and US newspapers and magazines and has offered strategic advice to numerous tech companies including Digital Island, Sony and several ISPs. He now specialises in social enterprise and is, among other things, a member of the Big Issue Invest advisory board. Big Issue Invest is the leading provider of finance to high-performing social enterprises and has a strong brand name based on its parent company, The Big Issue, described by McKinsey & Co as the most well-known and trusted social brand in the UK.
